# Long-text reasoning
## QwenLong-L1-32B GGUF
License: Apache-2.0
QwenLong-L1-32B is a large language model built for long-context reasoning. Trained with reinforcement learning, it performs strongly on several long-context question-answering benchmarks and handles complex reasoning tasks effectively (see the loading sketch below).
Tags: Large Language Model · Transformers
Published by Mungert · 927 downloads · 7 likes
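The GGUF builds listed here are intended for local inference. Below is a minimal loading sketch using the llama-cpp-python bindings; the quantized file name is hypothetical, and the context length should match whatever the build supports.

```python
from llama_cpp import Llama

# Hypothetical file name; use whichever quantization of the GGUF build you downloaded.
llm = Llama(
    model_path="QwenLong-L1-32B-Q4_K_M.gguf",
    n_ctx=32768,        # long-context reasoning needs a large context window
    n_gpu_layers=-1,    # offload all layers to the GPU if one is available
)

result = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize the key claims in the attached report."}],
    max_tokens=512,
)
print(result["choices"][0]["message"]["content"])
```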
## Phi-4-reasoning GGUF
License: MIT
Phi-4-reasoning is an advanced reasoning model fine-tuned from Phi-4. Through supervised fine-tuning and reinforcement learning, it shows strong reasoning ability in mathematics, science, and coding.
Tags: Large Language Model · Transformers
Published by unsloth · 6,046 downloads · 7 likes
## Qwen3-14B
License: Apache-2.0
Qwen3-14B is the latest large language model in the Qwen (Tongyi Qianwen) series, with 14.8 billion parameters. It supports switching between thinking and non-thinking modes and performs strongly in reasoning, instruction following, and agent capabilities (see the usage sketch below).
Tags: Large Language Model · Transformers
Published by Qwen · 297.02k downloads · 152 likes
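Qwen3's thinking mode is toggled through its chat template. The following is a minimal sketch using the Hugging Face transformers API; the `enable_thinking` flag is the template argument described for Qwen3, and the prompt is purely illustrative.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen3-14B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "How many prime numbers are there below 50?"}]

# enable_thinking=True lets the model emit a reasoning trace before its answer;
# set it to False to switch the model into non-thinking mode.
prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=True,
)

inputs = tokenizer([prompt], return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=1024)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```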